Results 1 - 20 of 19,102
1.
J Biomed Opt ; 29(3): 037003, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38560532

ABSTRACT

Significance: Glaucoma, a leading cause of global blindness, disproportionately affects low-income regions due to expensive diagnostic methods. Affordable intraocular pressure (IOP) measurement is crucial for early detection, especially in low- and middle-income countries. Aim: We developed a remote photonic IOP biomonitoring method by deep learning of the speckle patterns reflected from an eye sclera stimulated by a sound source. We aimed to achieve precise IOP measurements. Approach: IOP was artificially raised in 24 pig eyeballs, considered similar to human eyes, to apply our biomonitoring method. By deep learning of the speckle pattern videos, we analyzed the data for accurate IOP determination. Results: Our method demonstrated the possibility of high-precision IOP measurements. Deep learning effectively analyzed the speckle patterns, enabling accurate IOP determination, with the potential for global use. Conclusions: The novel, affordable, and accurate remote photonic IOP biomonitoring method for glaucoma diagnosis, tested on pig eyes, shows promising results. Leveraging deep learning and speckle pattern analysis, together with the development of a prototype for human eyes testing, could enhance diagnosis and management, particularly in resource-constrained settings worldwide.


Subject(s)
Deep Learning , Glaucoma , Humans , Animals , Swine , Intraocular Pressure , Glaucoma/diagnostic imaging , Tonometry, Ocular , Sclera
2.
Clin Orthop Surg ; 16(2): 210-216, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38562629

ABSTRACT

Background: As the population ages, the rates of hip diseases and fragility fractures are increasing, making total hip arthroplasty (THA) one of the best methods for treating elderly patients. With the increasing number of THA surgeries and diverse surgical methods, there is a need for standard evaluation protocols. This study aimed to use deep learning algorithms to classify THA videos and evaluate the accuracy of the labelling of these videos. Methods: In our study, we manually annotated 7 phases in THA, including skin incision, broaching, exposure of acetabulum, acetabular reaming, acetabular cup positioning, femoral stem insertion, and skin closure. Within each phase, a second trained annotator marked the beginning and end of instrument usages, such as the skin blade, forceps, Bovie, suction device, suture material, retractor, rasp, femoral stem, acetabular reamer, head trial, and real head. Results: In our study, we utilized YOLOv3 to collect 540 operating images of THA procedures and create a scene annotation model. The results of our study showed relatively high accuracy in the clear classification of surgical techniques such as skin incision and closure, broaching, acetabular reaming, and femoral stem insertion, with a mean average precision (mAP) of 0.75 or higher. Most of the equipment showed good accuracy of mAP 0.7 or higher, except for the suction device, suture material, and retractor. Conclusions: Scene annotation for the instrument and phases in THA using deep learning techniques may provide potentially useful tools for subsequent documentation, assessment of skills, and feedback.
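The mAP figures above rest on matching each predicted bounding box to a ground-truth box by intersection-over-union (IoU). As an illustration only (the box format and the 0.5 threshold are common defaults, not details taken from this article), a minimal IoU computation looks like:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection is usually counted as a true positive when IoU >= 0.5;
# average precision is then accumulated over the ranked detections.
print(round(iou((0, 0, 2, 2), (1, 1, 3, 3)), 3))  # prints 0.143 (1/7)
```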


Subject(s)
Arthroplasty, Replacement, Hip , Deep Learning , Fractures, Bone , Hip Prosthesis , Humans , Aged , Arthroplasty, Replacement, Hip/methods , Acetabulum/surgery , Fractures, Bone/surgery , Femur/surgery , Retrospective Studies
3.
PeerJ ; 12: e16952, 2024.
Article in English | MEDLINE | ID: mdl-38563008

ABSTRACT

Background: The aim of this study was to design a deep learning (DL) model to preoperatively predict the occurrence of central lymph node metastasis (CLNM) in patients with papillary thyroid microcarcinoma (PTMC). Methods: This research collected preoperative ultrasound (US) images and clinical factors of 611 PTMC patients. The clinical factors were analyzed using multivariate regression. Then, a DL model based on US images and clinical factors was developed to preoperatively predict CLNM. The model's efficacy was evaluated using the receiver operating characteristic (ROC) curve, along with accuracy, sensitivity, specificity, and the F1 score. Results: The multivariate analysis identified independent correlating factors, including age ≥55 (OR = 0.309, p < 0.001), tumor diameter (OR = 2.551, p = 0.010), macrocalcifications (OR = 1.832, p = 0.002), and capsular invasion (OR = 1.977, p = 0.005). The DL model using US images achieved an average area under the curve (AUC) of 0.65, slightly outperforming the model that employed traditional clinical factors (AUC = 0.64). Nevertheless, the model that incorporated both did not enhance prediction accuracy (AUC = 0.63). Conclusions: The suggested approach offers a reference for the treatment and supervision of PTMC. Among the three models used in this study, the deep model generally relied more on the image modality than on the clinical-record data modality when making predictions.
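The AUC values reported above have a simple rank interpretation: the probability that a randomly chosen positive case receives a higher model score than a randomly chosen negative one. A dependency-free sketch of that computation (the example scores are invented for illustration):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability a random positive outranks a random negative
    (ties count as 0.5). O(n*m), fine for small illustrative examples."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

print(auc([0.9, 0.8], [0.7, 0.1]))  # perfectly separated scores -> 1.0
```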


Subject(s)
Carcinoma, Papillary , Deep Learning , Thyroid Neoplasms , Humans , Lymphatic Metastasis/diagnostic imaging , Risk Factors , Thyroid Neoplasms/diagnostic imaging
4.
Opt Express ; 32(7): 12462-12475, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38571068

ABSTRACT

Quantitative phase contrast microscopy (QPCM) can realize high-quality imaging of sub-organelles inside live cells without fluorescence labeling, yet it requires at least three phase-shifted intensity images. Herein, we combine a novel convolutional neural network with QPCM to quantitatively obtain the phase distribution of a sample by only using two phase-shifted intensity images. Furthermore, we upgraded the QPCM setup by using a phase-type spatial light modulator (SLM) to record two phase-shifted intensity images in one shot, allowing for real-time quantitative phase imaging of moving samples or dynamic processes. The proposed technique was demonstrated by imaging the fine structures and fast dynamic behaviors of sub-organelles inside live COS7 cells and 3T3 cells, including mitochondria and lipid droplets, with a lateral spatial resolution of 245 nm and an imaging speed of 250 frames per second (FPS). We imagine that the proposed technique can provide an effective way for the high spatiotemporal resolution, high contrast, and label-free dynamic imaging of living cells.


Subject(s)
Deep Learning , 60704 , Animals , Mice , Mitochondria , Lipid Droplets
5.
Front Immunol ; 15: 1327779, 2024.
Article in English | MEDLINE | ID: mdl-38596674

ABSTRACT

Neoadjuvant chemoimmunotherapy has revolutionized the therapeutic strategy for non-small cell lung cancer (NSCLC), and identifying candidates likely to respond to this advanced treatment is of important clinical significance. This multi-institutional study aims to develop a deep learning model to predict pathologic complete response (pCR) to neoadjuvant immunotherapy in NSCLC based on computed tomography (CT) imaging and to further probe the biologic foundation of the proposed deep learning signature. A total of 248 participants who received neoadjuvant immunotherapy followed by surgery for NSCLC at Ruijin Hospital, Ningbo Hwamei Hospital, and the Affiliated Hospital of Zunyi Medical University from January 2019 to September 2023 were enrolled. Imaging data acquired within 2 weeks prior to neoadjuvant chemoimmunotherapy were retrospectively extracted. Patients from Ruijin Hospital were split into a training set (n = 104) and a validation set (n = 69) at a 6:4 ratio, and the participants from Ningbo Hwamei Hospital and the Affiliated Hospital of Zunyi Medical University served as an external cohort (n = 75). For the entire population, pCR was achieved in 29.4% (n = 73) of cases. The areas under the curve (AUCs) of the deep learning signature for pCR prediction were 0.775 (95% confidence interval [CI]: 0.649 - 0.901) and 0.743 (95% CI: 0.618 - 0.869) in the validation set and the external cohort, significantly superior to the 0.579 (95% CI: 0.468 - 0.689) and 0.569 (95% CI: 0.454 - 0.683) of the clinical model. Furthermore, higher deep learning scores correlated with upregulation of cell metabolism pathways and greater antitumor immune infiltration in the microenvironment. The developed deep learning model is capable of predicting pCR to neoadjuvant chemoimmunotherapy in patients with NSCLC.


Subject(s)
Carcinoma, Non-Small-Cell Lung , Deep Learning , Lung Neoplasms , Humans , Lung Neoplasms/diagnostic imaging , Lung Neoplasms/therapy , Carcinoma, Non-Small-Cell Lung/diagnostic imaging , Carcinoma, Non-Small-Cell Lung/therapy , Neoadjuvant Therapy , 60410 , Retrospective Studies , Immunotherapy , Tomography, X-Ray Computed , Tumor Microenvironment
6.
Arkh Patol ; 86(2): 65-71, 2024.
Article in Russian | MEDLINE | ID: mdl-38591909

ABSTRACT

The review presents key concepts and global developments in the field of artificial intelligence as applied to pathological anatomy. The work examines two types of artificial intelligence (AI): weak and strong. Experimental algorithms using both deep machine learning and computer vision technologies to work with whole-slide images (WSI) of preparations, to diagnose, and to make prognoses for various malignant neoplasms are reviewed. It has been established that weak AI, at this stage of development of computer (digital) pathological anatomy, shows significantly better results in speeding up and refining diagnostic procedures than strong AI, which has signs of general intelligence. The article also discusses three options for the further development of AI assistants for pathologists based on large language model technologies (strong AI): ChatGPT (PathAsst), Flan-PaLM2, and LIMA. The analysis of the literature identified key problems in the field: the equipment levels of pathology institutions, the shortage of experts for training neural networks, and the absence of strict criteria for the clinical viability of AI diagnostic technologies.


Subject(s)
Artificial Intelligence , Deep Learning , Humans , Neural Networks, Computer , Algorithms , Machine Learning
8.
Radiology ; 311(1): e232057, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38591974

ABSTRACT

Background Preoperative discrimination of preinvasive, minimally invasive, and invasive adenocarcinoma at CT informs clinical management decisions but may be challenging for classifying pure ground-glass nodules (pGGNs). Deep learning (DL) may improve ternary classification. Purpose To determine whether a strategy that includes an adjudication approach can enhance the performance of DL ternary classification models in predicting the invasiveness of adenocarcinoma at chest CT and maintain performance in classifying pGGNs. Materials and Methods In this retrospective study, six ternary models for classifying preinvasive, minimally invasive, and invasive adenocarcinoma were developed using a multicenter data set of lung nodules. The DL-based models were progressively modified through framework optimization, joint learning, and an adjudication strategy (simulating a multireader approach to resolving discordant nodule classifications), integrating two binary classification models with a ternary classification model to resolve discordant classifications sequentially. The six ternary models were then tested on an external data set of pGGNs imaged between December 2019 and January 2021. Diagnostic performance including accuracy, specificity, and sensitivity was assessed. The χ2 test was used to compare model performance in different subgroups stratified by clinical confounders. Results A total of 4929 nodules from 4483 patients (mean age, 50.1 years ± 9.5 [SD]; 2806 female) were divided into training (n = 3384), validation (n = 579), and internal (n = 966) test sets. A total of 361 pGGNs from 281 patients (mean age, 55.2 years ± 11.1 [SD]; 186 female) formed the external test set. The proposed strategy improved DL model performance in external testing (P < .001). 
For classifying minimally invasive adenocarcinoma, the accuracy was 85% and 79%, sensitivity was 75% and 63%, and specificity was 89% and 85% for the model with adjudication (model 6) and the model without (model 3), respectively. Model 6 showed a relatively narrow range (maximum minus minimum) across diagnostic indexes (accuracy, 1.7%; sensitivity, 7.3%; specificity, 0.9%) compared with the other models (accuracy, 0.6%-10.8%; sensitivity, 14%-39.1%; specificity, 5.5%-17.9%). Conclusion Combining framework optimization, joint learning, and an adjudication approach improved DL classification of adenocarcinoma invasiveness at chest CT. Published under a CC BY 4.0 license. Supplemental material is available for this article. See also the editorial by Sohn and Fields in this issue.


Subject(s)
Adenocarcinoma of Lung , Adenocarcinoma , Deep Learning , Lung Neoplasms , Humans , Female , Middle Aged , Retrospective Studies , Adenocarcinoma of Lung/diagnostic imaging , Adenocarcinoma/diagnostic imaging , Tomography, X-Ray Computed , Lung Neoplasms/diagnostic imaging
9.
BMC Oral Health ; 24(1): 426, 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38582843

ABSTRACT

BACKGROUND: Dental development assessment is an important factor in dental age estimation and dental maturity evaluation. This study aimed to develop and evaluate the performance of an automated dental development staging system based on Demirjian's method using deep learning. METHODS: The study included 5133 anonymous panoramic radiographs obtained from the Department of Pediatric Dentistry database at Seoul National University Dental Hospital between 2020 and 2021. The proposed methodology involves a three-step procedure for dental staging: detection, segmentation, and classification. The panoramic data were randomly divided into training and validation sets (8:2), and YOLOv5, U-Net, and EfficientNet were trained and employed for each stage. The models' performance, along with the Grad-CAM analysis of EfficientNet, was evaluated. RESULTS: The mean average precision (mAP) was 0.995 for detection, and the segmentation achieved an accuracy of 0.978. The classification performance showed F1 scores of 69.23, 80.67, 84.97, and 90.81 for the Incisor, Canine, Premolar, and Molar models, respectively. In the Grad-CAM analysis, the classification model focused on the apical portion of the developing tooth, a crucial feature for staging according to Demirjian's method. CONCLUSIONS: These results indicate that the proposed deep learning approach for automated dental staging can serve as a supportive tool for dentists, facilitating rapid and objective dental age estimation and dental maturity evaluation.


Subject(s)
Age Determination by Teeth , Deep Learning , Child , Humans , Radiography, Panoramic , Age Determination by Teeth/methods , Incisor , Molar
10.
J Biomed Opt ; 29(4): 046001, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38585417

ABSTRACT

Significance: Endoscopic screening for esophageal cancer (EC) may enable early cancer diagnosis and treatment. While optical microendoscopic technology has shown promise in improving specificity, the limited field of view (<1 mm) significantly reduces the ability to survey large areas efficiently in EC screening. Aim: To improve the efficiency of endoscopic screening, we propose a novel concept of end-expandable endoscopic optical fiber probe for larger field of visualization and for the first time evaluate a deep-learning-based image super-resolution (DL-SR) method to overcome the issue of limited sampling capability. Approach: To demonstrate feasibility of the end-expandable optical fiber probe, DL-SR was applied on simulated low-resolution microendoscopic images to generate super-resolved (SR) ones. Varying the degradation model of image data acquisition, we identified the optimal parameters for optical fiber probe prototyping. The proposed screening method was validated with a human pathology reading study. Results: For various degradation parameters considered, the DL-SR method demonstrated different levels of improvement of traditional measures of image quality. The endoscopists' interpretations of the SR images were comparable to those performed on the high-resolution ones. Conclusions: This work suggests avenues for development of DL-SR-enabled sparse image reconstruction to improve high-yield EC screening and similar clinical applications.


Subject(s)
Barrett Esophagus , Deep Learning , Esophageal Neoplasms , Humans , Optical Fibers , Esophageal Neoplasms/diagnostic imaging , Barrett Esophagus/pathology , Image Processing, Computer-Assisted
11.
Front Immunol ; 15: 1342285, 2024.
Article in English | MEDLINE | ID: mdl-38576618

ABSTRACT

B cell receptors (BCRs) denote antigen specificity, while corresponding cell subsets indicate B cell functionality. Since each B cell uniquely encodes this combination, physical isolation and subsequent processing of individual B cells become indispensable to identify both attributes. However, this approach accompanies high costs and inevitable information loss, hindering high-throughput investigation of B cell populations. Here, we present BCR-SORT, a deep learning model that predicts cell subsets from their corresponding BCR sequences by leveraging B cell activation and maturation signatures encoded within BCR sequences. Subsequently, BCR-SORT is demonstrated to improve reconstruction of BCR phylogenetic trees, and reproduce results consistent with those verified using physical isolation-based methods or prior knowledge. Notably, when applied to BCR sequences from COVID-19 vaccine recipients, it revealed inter-individual heterogeneity of evolutionary trajectories towards Omicron-binding memory B cells. Overall, BCR-SORT offers great potential to improve our understanding of B cell responses.


Subject(s)
B-Lymphocyte Subsets , Deep Learning , Humans , Phylogeny , COVID-19 Vaccines , Receptors, Antigen, B-Cell/genetics
12.
Sci Rep ; 14(1): 8253, 2024 04 08.
Article in English | MEDLINE | ID: mdl-38589478

ABSTRACT

This work presents a deep learning approach for rapid and accurate muscle water T2 mapping with subject-specific fat T2 calibration using multi-spin-echo acquisitions. This method addresses the computational limitations of conventional bi-component Extended Phase Graph fitting methods (nonlinear least squares and dictionary-based) by leveraging fully connected neural networks for fast processing with minimal computational resources. We validated the approach through in vivo experiments using two different MRI vendors. The results showed strong agreement of our deep learning approach with reference methods, summarized by Lin's concordance correlation coefficients ranging from 0.89 to 0.97. Further, the deep learning method achieved a significant computational time improvement, processing data 116 and 33 times faster than the nonlinear least squares and dictionary methods, respectively. In conclusion, the proposed approach demonstrated significant time and resource efficiency improvements over conventional methods while maintaining similar accuracy. This methodology makes the processing of water T2 data faster and easier for the user and will facilitate the use of quantitative muscle water T2 maps in clinical and research studies.
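The reported speed-up comes from replacing per-voxel iterative EPG fitting with a single feed-forward pass over all voxels at once. A minimal sketch of such a fully connected regressor (the layer sizes, random weights, and 17-echo input are illustrative assumptions, not taken from the article):

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_forward(signals, w1, b1, w2, b2):
    """Forward pass of a small fully connected regressor:
    multi-echo signal vectors -> one scalar T2 estimate per voxel."""
    hidden = np.maximum(0.0, signals @ w1 + b1)  # ReLU hidden layer
    return hidden @ w2 + b2                      # linear output head

n_echoes, n_hidden = 17, 32                      # sizes are illustrative
w1 = rng.standard_normal((n_echoes, n_hidden)) * 0.1
b1 = np.zeros(n_hidden)
w2 = rng.standard_normal((n_hidden, 1)) * 0.1
b2 = np.zeros(1)

batch = rng.random((1000, n_echoes))             # 1000 voxels processed at once
t2_estimates = mlp_forward(batch, w1, b1, w2, b2)
print(t2_estimates.shape)  # prints (1000, 1): one estimate per voxel
```

In practice the network would be trained against EPG-derived reference T2 values; the point of the sketch is that inference is a couple of matrix multiplications, which is where the 33-116x speed-up over iterative fitting comes from.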


Subject(s)
Algorithms , Deep Learning , Water , Calibration , Magnetic Resonance Imaging/methods , Muscles/diagnostic imaging , Phantoms, Imaging , Image Processing, Computer-Assisted/methods , Brain
13.
BMC Med Imaging ; 24(1): 83, 2024 Apr 08.
Article in English | MEDLINE | ID: mdl-38589793

ABSTRACT

The research focuses on the segmentation and classification of leukocytes, a crucial task in medical image analysis for diagnosing various diseases. The leukocyte dataset comprises four classes of images: monocytes, lymphocytes, eosinophils, and neutrophils. Leukocyte segmentation is achieved through image processing techniques, including background subtraction, noise removal, and contouring. To isolate leukocytes, background, erythrocyte, and leukocyte masks are created from the blood cell images. Isolated leukocytes are then subjected to data augmentation, including brightness and contrast adjustment, flipping, and random shearing, to improve the generalizability of the CNN model. A deep convolutional neural network (CNN) model is employed on the augmented dataset for effective feature extraction and classification. The deep CNN model consists of four convolutional blocks comprising eleven convolutional layers, eight batch normalization layers, eight rectified linear unit (ReLU) layers, and four dropout layers to capture increasingly complex patterns. For this research, a publicly available dataset from Kaggle consisting of a total of 12,444 images of the four types of leukocytes was used to conduct the experiments. The results showcase the robustness of the proposed framework, achieving impressive performance metrics with an accuracy of 97.98% and a precision of 97.97%. These outcomes affirm the efficacy of the devised segmentation and classification approach in accurately identifying and categorizing leukocytes. The combination of an advanced CNN architecture and meticulous pre-processing steps establishes a foundation for future developments in the field of medical image analysis.
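The brightness, contrast, and flipping augmentations mentioned above are simple array operations. A minimal sketch for a grayscale image in [0, 1] (the exact parameter ranges used in the study are not stated, so the values here are illustrative):

```python
import numpy as np

def augment(img, brightness=0.0, contrast=1.0, flip=False):
    """Simple photometric + geometric augmentation for a grayscale image
    in [0, 1]: scale contrast around the mid-level, shift brightness,
    optionally flip horizontally."""
    out = np.clip((img - 0.5) * contrast + 0.5 + brightness, 0.0, 1.0)
    return np.fliplr(out) if flip else out

img = np.array([[0.2, 0.8],
                [0.4, 0.6]])
flipped = augment(img, flip=True)
print(flipped[0].tolist())  # prints [0.8, 0.2]: columns swapped
```

Random shearing would additionally need an affine warp (e.g. via an image library); it is omitted here to keep the sketch dependency-free.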


Subject(s)
Deep Learning , Humans , Data Curation , Leukocytes , Neural Networks, Computer , Blood Cells , Image Processing, Computer-Assisted/methods
14.
JCO Clin Cancer Inform ; 8: e2300231, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38588476

ABSTRACT

PURPOSE: Body composition (BC) may play a role in outcome prognostication in patients with gastroesophageal adenocarcinoma (GEAC). Artificial intelligence provides new possibilities to opportunistically quantify BC from computed tomography (CT) scans. We developed a deep learning (DL) model for fully automatic BC quantification on routine staging CTs and determined its prognostic role in a clinical cohort of patients with GEAC. MATERIALS AND METHODS: We developed and tested a DL model to quantify BC measures defined as subcutaneous and visceral adipose tissue (VAT) and skeletal muscle on routine CT and investigated their prognostic value in a cohort of patients with GEAC using baseline, 3-6-month, and 6-12-month postoperative CTs. Primary outcome was all-cause mortality, and secondary outcome was disease-free survival (DFS). Cox regression assessed the association between (1) BC at baseline and mortality and (2) the decrease in BC between baseline and follow-up scans and mortality/DFS. RESULTS: Model performance was high with Dice coefficients ≥0.94 ± 0.06. Among 299 patients with GEAC (age 63.0 ± 10.7 years; 19.4% female), 140 deaths (47%) occurred over a median follow-up of 31.3 months. At baseline, no BC measure was associated with DFS. Only a substantial decrease in VAT >70% after a 6- to 12-month follow-up was associated with mortality (hazard ratio [HR], 1.99 [95% CI, 1.18 to 3.34]; P = .009) and DFS (HR, 1.73 [95% CI, 1.01 to 2.95]; P = .045) independent of age, sex, BMI, Union for International Cancer Control stage, histologic grading, resection status, neoadjuvant therapy, and time between surgery and follow-up CT. CONCLUSION: DL enables opportunistic estimation of BC from routine staging CT to quantify prognostic information. In patients with GEAC, only a substantial decrease of VAT 6-12 months postsurgery was an independent predictor for DFS beyond traditional risk factors, which may help to identify individuals at high risk who would otherwise go unnoticed.
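The Dice coefficient used above to report segmentation performance measures the overlap between a predicted and a reference mask. As an illustration (not code from the study):

```python
import numpy as np

def dice(mask_a, mask_b):
    """Dice coefficient between two binary segmentation masks:
    2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

print(dice([1, 1, 0, 0], [1, 0, 1, 0]))  # prints 0.5: 2*1 / (2+2)
```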


Subject(s)
Adenocarcinoma , Deep Learning , Humans , Female , Middle Aged , Aged , Male , Artificial Intelligence , Prognosis , Adenocarcinoma/diagnostic imaging , Adenocarcinoma/surgery , Body Composition
15.
Mikrochim Acta ; 191(5): 255, 2024 04 10.
Article in English | MEDLINE | ID: mdl-38594377

ABSTRACT

Perovskite quantum dots (PQDs) are novel nanomaterials wherein perovskites are used to formulate quantum dots (QDs). The present study utilizes the excellent fluorescence quantum yields of these nanomaterials to detect 16S rRNA of the circulating microbiome for risk assessment of cardiovascular diseases (CVDs). A long short-term memory (LSTM) deep learning model was used to find the association of circulating bacterial species with CVD risk, which showed the abundance of three different bacterial species (Bauldia litoralis (BL), Hymenobacter properus (HYM), and Virgisporangium myanmarense (VIG)). The observations suggest that the developed nano-sensor provides high sensitivity, selectivity, and applicability. The observed sensitivities for Bauldia litoralis, Hymenobacter properus, and Virgisporangium myanmarense were 0.606, 0.300, and 0.281 fg, respectively. The developed sensor eliminates the need for labelling, amplification, quantification, and biochemical assessments, which are labour-intensive, time-consuming, and less reliable. Due to its rapid detection time, user-friendly nature, and stability, the proposed method has a significant advantage in facilitating point-of-care testing of CVDs in the future. This may also facilitate easy integration of the approach into various healthcare settings, making it accessible and valuable for resource-constrained environments.


Subject(s)
Alphaproteobacteria , Calcium Compounds , Cardiovascular Diseases , Deep Learning , Micromonosporaceae , Oxides , Quantum Dots , Titanium , Humans , RNA, Ribosomal, 16S/genetics , Cardiovascular Diseases/diagnosis
16.
Biomed Eng Online ; 23(1): 41, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38594729

ABSTRACT

BACKGROUND: The timely identification and management of ovarian cancer are critical determinants of patient prognosis. In this study, we developed and validated a deep learning radiomics nomogram (DLR_Nomogram) based on ultrasound (US) imaging to accurately predict the malignant risk of ovarian tumours and compared the diagnostic performance of the DLR_Nomogram to that of the ovarian-adnexal reporting and data system (O-RADS). METHODS: This study encompasses two research tasks. Patients were randomly divided into training and testing sets in an 8:2 ratio for both tasks. In task 1, we assessed the malignancy risk of 849 patients with ovarian tumours. In task 2, we evaluated the malignancy risk of 391 patients with O-RADS 4 and O-RADS 5 ovarian neoplasms. Three models were developed and validated to predict the risk of malignancy in ovarian tumours. The predicted outcomes of the models for each sample were merged to form a new feature set that was utilised as an input for the logistic regression (LR) model for constructing a combined model, visualised as the DLR_Nomogram. Then, the diagnostic performance of these models was evaluated by the receiver operating characteristic curve (ROC). RESULTS: The DLR_Nomogram demonstrated superior predictive performance in predicting the malignant risk of ovarian tumours, as evidenced by area under the ROC curve (AUC) values of 0.985 and 0.928 for the training and testing sets of task 1, respectively. The AUC value of its testing set was lower than that of the O-RADS; however, the difference was not statistically significant. The DLR_Nomogram exhibited the highest AUC values of 0.955 and 0.869 in the training and testing sets of task 2, respectively. The DLR_Nomogram showed satisfactory fitting performance for both tasks in Hosmer-Lemeshow testing. Decision curve analysis demonstrated that the DLR_Nomogram yielded greater net clinical benefits for predicting malignant ovarian tumours within a specific range of threshold values. 
CONCLUSIONS: The US-based DLR_Nomogram has shown the capability to accurately predict the malignant risk of ovarian tumours, exhibiting a predictive efficacy comparable to that of O-RADS.


Subject(s)
Deep Learning , Ovarian Neoplasms , Humans , Female , Nomograms , 60570 , Ovarian Neoplasms/diagnostic imaging , Ultrasonography , Retrospective Studies
17.
J Thorac Imaging ; 39(3): 194-199, 2024 May 01.
Article in English | MEDLINE | ID: mdl-38640144

ABSTRACT

PURPOSE: To develop and evaluate a deep convolutional neural network (DCNN) model for the classification of acute and chronic lung nodules from nontuberculous mycobacterial-lung disease (NTM-LD) on computed tomography (CT). MATERIALS AND METHODS: We collected a data set of 650 nodules (316 acute and 334 chronic) from the CT scans of 110 patients with NTM-LD. The data set was divided into training, validation, and test sets in a ratio of 4:1:1. Bounding boxes were used to crop the 2D CT images down to the area of interest. A DCNN model was built using 11 convolutional layers and trained on these images. The performance of the model was evaluated on the hold-out test set and compared with that of 3 radiologists who independently reviewed the images. RESULTS: The DCNN model achieved an area under the receiver operating characteristic curve of 0.806 for differentiating acute and chronic NTM-LD nodules, corresponding to sensitivity, specificity, and accuracy of 76%, 68%, and 72%, respectively. The performance of the model was comparable to that of the 3 radiologists, who had area under the receiver operating characteristic curve, sensitivity, specificity, and accuracy of 0.693 to 0.771, 61% to 82%, 59% to 73%, and 60% to 73%, respectively. CONCLUSIONS: This study demonstrated the feasibility of using a DCNN model for the classification of the activity of NTM-LD nodules on chest CT. The model performance was comparable to that of radiologists. This approach can potentially and efficiently improve the diagnosis and management of NTM-LD.


Subject(s)
Deep Learning , Lung Neoplasms , Pneumonia , Humans , Neural Networks, Computer , Tomography, X-Ray Computed/methods , Radiographic Image Interpretation, Computer-Assisted/methods , Retrospective Studies , Lung Neoplasms/diagnostic imaging
18.
BMC Med Inform Decis Mak ; 24(1): 102, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641580

ABSTRACT

Congenital anomalies are the main cause of fetal death and of infant morbidity and mortality during childhood. They can be detected through a fetal morphology scan. An experienced sonographer (with more than 2,000 performed scans) has a congenital anomaly detection rate of around 52%; the rate drops to 32.5% for a junior sonographer. One viable solution to improve these performances is to use artificial intelligence. The first step in a fetal morphology scan is the differentiation between the view planes of the fetus, followed by segmentation of the internal organs in each view plane. This study presents an artificial intelligence empowered decision support system that can label anatomical organs using a merger of deep learning and clustering techniques, followed by organ segmentation with YOLO8. Our framework was tested on a fetal morphology image dataset covering the fetal abdomen. The experimental results show that the system can correctly label the view plane and the corresponding organs on real-time ultrasound movies. Trial registration: The study is registered under the name "Pattern recognition and Anomaly Detection in fetal morphology using Deep Learning and Statistical Learning (PARADISE)", project number 101PCE/2022, project code PN-III-P4-PCE-2021-0057; ClinicalTrials.gov unique identifying number NCT05738954, date of registration 02.11.2023.


Subject(s)
Deep Learning , Humans , Artificial Intelligence , Fetus/diagnostic imaging
19.
BMC Med Imaging ; 24(1): 92, 2024 Apr 19.
Article in English | MEDLINE | ID: mdl-38641591

ABSTRACT

BACKGROUND: The study aimed to develop and validate a deep learning-based Computer Aided Triage (CADt) algorithm for detecting pleural effusion in chest radiographs using an active learning (AL) framework. This addresses the critical need for a clinical-grade algorithm that can promptly diagnose pleural effusion, which affects approximately 1.5 million people annually in the United States. METHODS: In this multisite study, 10,599 chest radiographs from 2006 to 2018 were retrospectively collected from an institution in Taiwan to train the deep learning algorithm. The AL framework substantially reduced the need for expert annotations. For external validation, the algorithm was tested on a multisite dataset of 600 chest radiographs from 22 clinical sites in the United States and Taiwan, which were annotated by three U.S. board-certified radiologists. RESULTS: The CADt algorithm demonstrated high effectiveness in identifying pleural effusion, achieving a sensitivity of 0.95 (95% CI: [0.92, 0.97]) and a specificity of 0.97 (95% CI: [0.95, 0.99]). The area under the receiver operating characteristic curve (AUC) was 0.97 (95% DeLong's CI: [0.95, 0.99]). Subgroup analyses showed that the algorithm maintained robust performance across various demographics and clinical settings. CONCLUSION: This study presents a novel approach to developing clinical-grade CADt solutions for the diagnosis of pleural effusion. The AL-based CADt algorithm not only achieved high accuracy in detecting pleural effusion but also significantly reduced the workload required of clinical experts in annotating medical data. This method enhances the feasibility of employing advanced technological solutions for prompt and accurate diagnosis in medical settings.
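The abstract credits an active learning framework with reducing annotation workload. A common AL strategy is uncertainty sampling: only the unlabeled images whose model scores sit closest to the decision boundary are sent to radiologists for labeling. The sketch below shows that selection step with invented scores (the study's actual AL strategy and model are not specified in the abstract):

```python
# Uncertainty sampling: pick the `budget` pool items whose predicted
# probability is closest to the 0.5 decision boundary, i.e. the items
# the model is least sure about and an expert annotation helps most.

def select_for_annotation(probabilities, budget):
    ranked = sorted(range(len(probabilities)),
                    key=lambda i: abs(probabilities[i] - 0.5))
    return ranked[:budget]

# illustrative model scores for an unlabeled pool of radiographs
pool_probs = [0.97, 0.51, 0.03, 0.48, 0.90, 0.55]
print(select_for_annotation(pool_probs, budget=2))  # → [1, 3]
```

Confident predictions (0.97, 0.03) are left unlabeled, which is how such a loop cuts expert workload relative to annotating the whole pool.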


Subject(s)
Deep Learning , Pleural Effusion , Humans , Radiography, Thoracic/methods , Retrospective Studies , Radiography , Pleural Effusion/diagnostic imaging
20.
BMC Bioinformatics ; 25(1): 156, 2024 Apr 20.
Article in English | MEDLINE | ID: mdl-38641811

ABSTRACT

BACKGROUND: Accurately identifying drug-target interaction (DTI), affinity (DTA), and binding sites (DTS) is crucial for drug screening, repositioning, and design, as well as for understanding the functions of targets. Although there are a few online platforms based on deep learning for drug-target interaction, affinity, and binding site identification, there is currently no integrated online platform covering all three aspects. RESULTS: Our solution, the novel integrated online platform Drug-Online, has been developed to facilitate drug screening, target identification, and understanding of target functions in a progressive "interaction-affinity-binding sites" manner. The Drug-Online platform consists of three parts: the first part uses the drug-target interaction identification method MGraphDTA, based on graph neural networks (GNN) and convolutional neural networks (CNN), to identify whether there is a drug-target interaction. If an interaction is identified, the second part employs the drug-target affinity identification method MMDTA, also based on GNN and CNN, to calculate the strength of the drug-target interaction, i.e., affinity. Finally, the third part identifies drug-target binding sites, i.e., pockets; the method pt-lm-gnn used in this part is also based on GNN. CONCLUSIONS: Drug-Online is a reliable online platform that integrates drug-target interaction, affinity, and binding site identification. It is freely available via the Internet at http://39.106.7.26:8000/Drug-Online/ .
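The platform's staged flow, where affinity and binding-site prediction run only if an interaction is first identified, can be sketched as a simple gated pipeline. The three predictor callables below are placeholders, not MGraphDTA, MMDTA, or pt-lm-gnn:

```python
# Staged "interaction -> affinity -> binding sites" screening: downstream
# stages run only when the interaction stage returns a positive result.

def screen(pair, predict_interaction, predict_affinity, predict_sites):
    result = {"interaction": predict_interaction(pair)}
    if not result["interaction"]:
        return result  # no interaction: skip affinity and pocket prediction
    result["affinity"] = predict_affinity(pair)
    result["binding_sites"] = predict_sites(pair)
    return result

# toy stand-in predictors for one drug-target pair
out = screen(("drugA", "targetB"),
             predict_interaction=lambda p: True,
             predict_affinity=lambda p: 7.2,
             predict_sites=lambda p: ["pocket_1"])
print(out)
```

The gating reflects the abstract's progressive design: pairs rejected at the interaction stage never incur the cost of the affinity and pocket models.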


Subject(s)
Deep Learning , Drug Interactions , Binding Sites , Drug Delivery Systems , Drug Evaluation, Preclinical